
An experiment conducted with ChatGPT suggested that the artificial intelligence may be able to identify when someone is lying.
The chatbot explained that people who lie “do not use a universally fixed set of words” but tend to rely more heavily on certain expressions associated with deception.
While the AI acknowledged that using these words alone does not prove that someone is lying, it said that, analyzed alongside body language cues, they can help detect lies.
Following the tests, conducted by the Argentine outlet La Nación, ChatGPT also shared a list of the terms liars most commonly use to subtly manipulate the facts.
The chatbot cited terms such as “Honestly” and “I’m telling the truth” as ways liars try to lend credibility to a false narrative.
“To be honest” is used to project trustworthiness, “I swear” often appears as a response to disbelief, while “Never” and “Always” are exaggerations meant to make the lie sound more convincing.
“If I’m not mistaken” and “I don’t remember well” convey ambiguity, offering an escape route if the liar is challenged.
Phrases like “Why would I lie to you?” and “As I said before” work as manipulation, making listeners doubt themselves and their own memory.
Finally, expressions like “People say” let the speaker hide behind vague sources instead of giving a direct answer, while “That doesn’t make sense” is often used to discredit the other person’s logic rather than offer a real defense of the argument.
Photo and video: Unsplash. This content was created with the help of AI and reviewed by the editorial team.
